
    Temporal Convolution Networks for Real-Time Abdominal Fetal Aorta Analysis with Ultrasound

    The automatic analysis of ultrasound sequences can substantially improve the efficiency of clinical diagnosis. In this work we present our attempt to automate the challenging task of measuring the vascular diameter of the fetal abdominal aorta from ultrasound images. We propose a neural network architecture consisting of three blocks: a convolutional layer for the extraction of imaging features, a Convolution Gated Recurrent Unit (C-GRU) for enforcing temporal coherence across video frames and exploiting the temporal redundancy of the signal, and a regularized loss function, called CyclicLoss, to impose our prior knowledge about the periodicity of the observed signal. We present experimental evidence suggesting that the proposed architecture can reach an accuracy substantially superior to previously proposed methods, reducing the mean squared error from 0.31 mm^2 (state of the art) to 0.09 mm^2 and the relative error from 8.1% to 5.3%. The mean execution speed of 289 frames per second makes the proposed approach suitable for real-time clinical use.
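    A minimal sketch, assuming PyTorch and hypothetical names (ConvGRUCell, cyclic_regularizer): the paper's exact layer sizes and loss form are not given here, so this only illustrates the two ingredients above, a convolutional GRU that propagates a spatial hidden state across frames, and a periodicity penalty in the spirit of the CyclicLoss that ties predictions one (assumed known) cardiac period apart.

```python
# Hedged sketch, not the authors' code: a minimal ConvGRU cell plus a
# plausible periodicity penalty in the spirit of the CyclicLoss above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvGRUCell(nn.Module):
    """GRU whose gates are 2D convolutions, so the hidden state keeps its
    spatial layout while aggregating information across video frames."""
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=k // 2)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)

    def forward(self, x, h=None):
        if h is None:
            h = x.new_zeros(x.size(0), self.hid_ch, x.size(2), x.size(3))
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, 1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_tilde   # updated hidden state

def cyclic_regularizer(diameters: torch.Tensor, period: int) -> torch.Tensor:
    """Penalize disagreement between predictions one (assumed known)
    cardiac period apart; added to the main regression loss."""
    return F.mse_loss(diameters[:-period], diameters[period:])

# Smoke test: 20 frames of single-channel 32x32 ultrasound patches.
cell, h = ConvGRUCell(1, 8), None
frames = torch.randn(20, 1, 1, 32, 32)
diams = []
for t in range(frames.size(0)):
    h = cell(frames[t], h)
    diams.append(h.mean())               # stand-in for a regression head
loss_reg = cyclic_regularizer(torch.stack(diams), period=5)
```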

    Box2Poly: Memory-Efficient Polygon Prediction of Arbitrarily Shaped and Rotated Text

    Recently, Transformer-based text detection techniques have sought to predict polygons by encoding the coordinates of individual boundary vertices with distinct query features. However, this approach incurs a significant memory overhead and struggles to capture the intricate relationships between vertices belonging to the same instance. Consequently, irregular text layouts often lead to the prediction of outlying vertices, diminishing the quality of the results. To address these challenges, we present an approach rooted in Sparse R-CNN: a cascade decoding pipeline for polygon prediction. Our method ensures precision by iteratively refining polygon predictions, conditioning each refinement on the scale and location of the preceding result. Thanks to this stabilized regression pipeline, even a single feature vector guiding polygon instance regression yields promising detection results. At the same time, the use of instance-level feature proposals substantially improves memory efficiency (>50% less memory than the state-of-the-art method DPText-DETR) and reduces inference time (>40% less than DPText-DETR), with only a minor performance drop on benchmarks.
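    A minimal sketch, assuming PyTorch and hypothetical names (PolyRefineStage, cascade_decode); this is not the Box2Poly code, only an illustration of the cascade idea above: each stage regresses per-vertex offsets from a single instance feature vector, with the step normalized by the extent of the preceding polygon estimate.

```python
# Hedged sketch, not the Box2Poly implementation: a cascade that refines a
# polygon over several stages, each conditioning on the previous estimate.
import torch
import torch.nn as nn

class PolyRefineStage(nn.Module):
    """One decoding stage: a single instance feature vector predicts
    per-vertex (dx, dy) offsets for the current polygon estimate."""
    def __init__(self, feat_dim: int, n_vertices: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 2 * n_vertices))

    def forward(self, inst_feat, polygon):
        # polygon: (B, V, 2) vertex coordinates; inst_feat: (B, D)
        offsets = self.head(inst_feat).view_as(polygon)
        # Normalize the step by the current polygon's extent, so each
        # refinement is relative to the scale of the preceding result.
        extent = polygon.amax(dim=1, keepdim=True) - polygon.amin(dim=1, keepdim=True)
        return polygon + torch.tanh(offsets) * extent

def cascade_decode(inst_feat, init_polygon, stages):
    """Iteratively refine, as in the stabilized regression pipeline above."""
    poly = init_polygon
    for stage in stages:
        poly = stage(inst_feat, poly)
    return poly

# Smoke test: 2 instances, 16-vertex polygons, 3 cascade stages.
stages = nn.ModuleList([PolyRefineStage(256, 16) for _ in range(3)])
feat = torch.randn(2, 256)
init = torch.rand(2, 16, 2) * 100        # e.g. vertices sampled on a box
print(cascade_decode(feat, init, stages).shape)  # torch.Size([2, 16, 2])
```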

    Parallelization of the WHAM algorithm with NVIDIA CUDA

    The aim of my thesis is to parallelize the Weighted Histogram Analysis Method (WHAM), a popular algorithm used to calculate the free energy of a molecular system in molecular dynamics simulations. WHAM works in post-processing in cooperation with another algorithm called umbrella sampling. Umbrella sampling adds a bias to the potential energy of the system in order to force the system to sample a specific region of configurational space. N independent simulations are performed in order to sample all the regions of interest. Subsequently, the WHAM algorithm is used to estimate the original system energy starting from the N atomic trajectories. The parallelization of WHAM has been performed with CUDA, a language that allows programs to run on the GPUs of NVIDIA graphics cards, which have a parallel architecture. The parallel implementation can substantially speed up WHAM execution compared to previous serial CPU implementations, since the CPU code becomes a bottleneck for very high numbers of iterations. The algorithm has been written in C++ and executed on UNIX systems equipped with NVIDIA graphics cards. The results were satisfying, showing a performance increase when the model was executed on graphics cards with higher compute capability. Nonetheless, the GPUs used to test the algorithm are quite old and not designed for scientific computation. It is likely that a further performance increase would be obtained if the algorithm were executed on GPU clusters with a high level of computational efficiency. The thesis is organized as follows: I first describe the mathematical formulation of umbrella sampling and the WHAM algorithm, with their applications in the study of ionic channels and in molecular docking (Chapter 1); then I present the CUDA architectures used to implement the model (Chapter 2); finally, the results obtained on model systems are presented (Chapter 3).
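    A minimal sketch, assuming NumPy, of the standard WHAM self-consistent iteration (not the thesis's C++/CUDA code): given histograms from the N biased windows and the bias energies per bin, it alternates between the unbiased bin probabilities and the per-window free-energy shifts until convergence.

```python
# Hedged sketch, not the thesis code: the classic WHAM self-consistent
# iteration in NumPy. counts[i, b] is the histogram of window i in bin b,
# bias[i, b] the umbrella bias energy w_i(x_b), n_samples[i] the samples
# per window, and beta = 1/(kB*T).
import numpy as np

def wham(counts, bias, n_samples, beta, tol=1e-7, max_iter=100_000):
    K, B = counts.shape
    f = np.zeros(K)                        # free-energy shift of each window
    numer = counts.sum(axis=0)             # sum_i n_i(x_b), constant
    p = np.full(B, 1.0 / B)
    for _ in range(max_iter):
        # Unbiased probability: p_b = sum_i n_i(b) / sum_i N_i e^{beta(f_i - w_ib)}
        denom = (n_samples[:, None] * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
        p = numer / denom
        # Window shifts: e^{-beta f_i} = sum_b p_b e^{-beta w_ib}
        f_new = -np.log((p[None, :] * np.exp(-beta * bias)).sum(axis=1)) / beta
        f_new -= f_new[0]                  # fix the arbitrary constant
        converged = np.max(np.abs(f_new - f)) < tol
        f = f_new
        if converged:
            break
    return -np.log(p) / beta, f            # free-energy profile, shifts
```

    Each bin's denominator is an independent reduction over the K windows, which is the kind of data-parallel structure a CUDA port can map onto threads and thread blocks.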

    Experimental-analytical evaluation of sustainable syngas-biodiesel CHP systems based on oleaginous crop rotation

    This work investigates how the solutions adopted for SRF (short rotation forestry) can be applied to oleaginous cultures for bioenergy production with a dual-fuel diesel engine. The method is based on four sub-systems: a seed press for oil production, a downdraft gasifier, a biodiesel conversion plant, and a dual-fuel biodiesel IC engine for CHP (combined heat and power) production. The plant is modeled analytically, except for the IC engine, which was characterized experimentally. Results showed that, under the assumption of 8000 hours/year of power plant operation, a surface of 27 hectares can supply enough syngas and biodiesel to run a CHP unit with a nominal electrical power of 13.61 kW. Moreover, the experimental analysis showed that the engine running in dual-fuel mode suffers almost no significant loss in performance. In addition, the use of syngas yields strong benefits in terms of soot emissions (measured with an opacimeter), as well as in terms of brake fuel conversion efficiency.
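    A back-of-the-envelope check using only the figures quoted above (13.61 kW nominal electrical power, 8000 hours/year, 27 hectares); the per-hectare figure is derived here for illustration and is not a number from the paper.

```python
# Back-of-the-envelope check from the figures quoted in the abstract.
P_el_kW = 13.61          # nominal electrical power of the CHP unit
hours = 8000             # assumed power plant operation per year
area_ha = 27             # oleaginous crop surface feeding the plant

annual_kWh = P_el_kW * hours          # ~108,880 kWh of electricity per year
per_ha = annual_kWh / area_ha         # ~4,033 kWh/(ha*year), illustrative
print(f"{annual_kWh:,.0f} kWh/yr  ->  {per_ha:,.0f} kWh/(ha*yr)")
```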

    Experimental investigation on a Common Rail Diesel engine partially fuelled by syngas

    The high efficiency, reliability and flexibility of modern passenger-car Diesel engines make these power units quite attractive for stationary power plants totally or partially running on fuels derived from biomass, in particular on syngas. The engine cost, which is obviously higher than that of current industrial engines, may not be a big obstacle, provided that the re-engineering work is limited and that performance and efficiency are enhanced. The goal of this work is to explore the potential of a current automotive turbocharged Diesel engine running on both Diesel fuel and syngas, by means of a comprehensive experimental investigation focused on the combustion process. The engine is operated at the speed most typical of stationary power plants (3000 rpm), considering three different loads (50, 100 and 300 Nm, corresponding to 16, 31 and 94 kW). For each operating condition, the syngas rate is progressively increased until it provides a maximum heating power of 85 kW, while simultaneously reducing the amount of injected Diesel oil. Maximum care is taken to guarantee a constant quality of the syngas flow throughout the tests, as well as to maintain the same engine control parameters, in particular the boost pressure. It is found that in-cylinder pressure traces do not change much, even when the amount of Diesel fuel is drastically reduced: this is a very encouraging result, because it demonstrates that there is no need to radically modify the standard stock engine design. Another promising outcome is the slight but consistent enhancement of the engine brake efficiency: the use of syngas not only reduces the consumption of Diesel oil, but also improves the combustion quality. The authors acknowledge that this study is only a starting point: further investigation is required to cover all the aspects related to the industrial application of this syngas-Diesel combustion concept, in particular the impact on pollutant emissions and on engine durability.
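    A quick arithmetic check of the load points quoted above, using the standard relation between brake power, speed and torque, P = 2*pi*n*T/60; only numbers stated in the abstract are used.

```python
# Check of the quoted load points: brake power P = 2*pi*(n/60)*T.
import math

def brake_power_kw(torque_nm: float, rpm: float) -> float:
    """Brake power in kW from torque (Nm) and engine speed (rpm)."""
    return 2 * math.pi * (rpm / 60) * torque_nm / 1000

for torque in (50, 100, 300):
    p = brake_power_kw(torque, 3000)
    print(f"{torque:3d} Nm @ 3000 rpm -> {p:4.0f} kW")
# -> 16, 31 and 94 kW, matching the loads reported in the abstract.
```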

    A global benchmark of algorithms for segmenting the left atrium from late gadolinium-enhanced cardiac magnetic resonance imaging

    Segmentation of medical images, particularly late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) used for visualizing diseased atrial structures, is a crucial first step in ablation treatment of atrial fibrillation. However, direct segmentation of LGE-MRIs is challenging due to the varying intensities caused by contrast agents. Since most clinical studies have relied on manual, labor-intensive approaches, automatic methods are of high interest, particularly optimized machine learning approaches. To address this, we organized the 2018 Left Atrium Segmentation Challenge using 154 3D LGE-MRIs, currently the world's largest atrial LGE-MRI dataset, with labels of the left atrium segmented by three medical experts, ultimately attracting the participation of 27 international teams. In this paper, the submitted algorithms are analyzed extensively using technical and biological metrics, through subgroup and hyper-parameter analyses, offering an overall picture of the major design choices of convolutional neural networks (CNNs) and practical considerations for achieving state-of-the-art left atrium segmentation. Results show that the top method achieved a Dice score of 93.2% and a mean surface-to-surface distance of 0.7 mm, significantly outperforming the prior state of the art. In particular, our analysis demonstrated that sequentially used double CNNs, in which a first CNN performs automatic region-of-interest localization and a second CNN performs refined regional segmentation, achieved superior results to traditional methods and to machine learning approaches containing a single CNN. This large-scale benchmarking study makes a significant step towards much-improved segmentation methods for atrial LGE-MRIs and will serve as an important benchmark for evaluating and comparing future work in the field. Furthermore, the findings from this study can potentially be extended to other imaging datasets and modalities, with an impact on the wider medical imaging community.
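    A minimal sketch, assuming NumPy and hypothetical roi_net/seg_net callables standing in for trained CNNs: the double sequentially used CNN design the benchmark found strongest (coarse localization, then refined segmentation inside the crop), together with the Dice overlap used to rank submissions.

```python
# Hedged sketch, not a challenge submission: two-stage segmentation with
# placeholder networks, plus the Dice metric used for ranking above.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def two_stage_segment(volume, roi_net, seg_net, margin=8):
    """roi_net/seg_net are hypothetical callables mapping an array to a
    probability map; the first localizes the atrium, the second refines."""
    coarse = roi_net(volume) > 0.5
    idx = np.nonzero(coarse)
    lo = [max(int(a.min()) - margin, 0) for a in idx]
    hi = [int(a.max()) + margin + 1 for a in idx]
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    fine = seg_net(crop) > 0.5                  # refined regional mask
    out = np.zeros(volume.shape, dtype=bool)
    out[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine
    return out

# Smoke test with trivial identity stand-ins for the two networks.
vol = np.zeros((32, 32, 32))
vol[10:20, 8:18, 12:22] = 1.0
identity = lambda v: v
print(dice(two_stage_segment(vol, identity, identity), vol > 0.5))  # 1.0
```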